In decision tree learning, the information gain ratio is the ratio of information gain to intrinsic information. It is used to reduce the bias towards multi-valued attributes by taking the number and size of branches into account when choosing an attribute. (See http://www.ke.tu-darmstadt.de/lehre/archiv/ws0809/mldm/dt.pdf)

== Information Gain Calculation ==

Let $Attr$ be the set of all attributes and $Ex$ the set of all training examples. For an example $x \in Ex$ and an attribute $a \in Attr$, $value(x, a)$ defines the value of the specific example $x$ for attribute $a$, and $H$ specifies the entropy. The function $values(a)$ denotes the set of all possible values of attribute $a \in Attr$. The information gain for an attribute $a \in Attr$ is defined as follows:

$$
IG(Ex, a) = H(Ex) - \sum_{v \in values(a)} \frac{\left|\{x \in Ex \mid value(x, a) = v\}\right|}{|Ex|} \cdot H\!\left(\{x \in Ex \mid value(x, a) = v\}\right)
$$

The information gain is equal to the total entropy for an attribute if, for each of the attribute's values, a unique classification can be made for the result attribute. In this case, the relative entropies subtracted from the total entropy are 0.
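The definitions above translate directly into code. The following is a minimal Python sketch, not taken from the article: the function names and the toy dataset are illustrative, and the gain ratio is computed under the standard assumption that the intrinsic information is the entropy of the attribute's own value distribution.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy H of a multiset of class labels, in bits."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(attr_values, labels):
    """IG(Ex, a): total entropy minus the weighted entropy of the
    label subsets induced by each value of the attribute."""
    total = len(labels)
    partitions = defaultdict(list)
    for v, y in zip(attr_values, labels):
        partitions[v].append(y)
    remainder = sum((len(part) / total) * entropy(part)
                    for part in partitions.values())
    return entropy(labels) - remainder

def intrinsic_value(attr_values):
    """Intrinsic information IV(Ex, a): entropy of the attribute's
    value distribution itself (assumed definition)."""
    return entropy(attr_values)

def information_gain_ratio(attr_values, labels):
    """IGR(Ex, a) = IG(Ex, a) / IV(Ex, a)."""
    iv = intrinsic_value(attr_values)
    return information_gain(attr_values, labels) / iv if iv > 0 else 0.0

# Toy example: the attribute perfectly separates the two classes, so the
# relative entropies of the partitions are 0 and IG equals H(Ex) = 1 bit.
outlook = ["sunny", "sunny", "rain", "rain"]
play    = ["no",    "no",    "yes",  "yes"]
print(information_gain(outlook, play))        # 1.0
print(information_gain_ratio(outlook, play))  # 1.0
```

The toy example illustrates the final remark of the section: because each attribute value maps to a single class, every partition has zero entropy and the information gain equals the total entropy.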